Retrieval-augmented generation models offer several benefits over standalone language models: besides a textual answer to a given query, they provide provenance items retrieved from an updateable knowledge base. However, they are also more complex systems and need to handle long inputs. In this work, we introduce FiD-Light to strongly increase the efficiency of the state-of-the-art retrieval-augmented FiD model, while maintaining the same level of effectiveness. Our FiD-Light model constrains the information flow from the encoder (which encodes passages separately) to the decoder (which consumes a concatenation of the encoded representations). Furthermore, we adapt FiD-Light with re-ranking capabilities through textual source pointers, to improve the top-ranked provenance precision. Our experiments on a diverse set of seven knowledge-intensive tasks (KILT) show that FiD-Light consistently improves the Pareto frontier between query latency and effectiveness. FiD-Light with source pointing sets new state-of-the-art results on six KILT tasks for combined text generation and provenance retrieval evaluation, while maintaining reasonable efficiency.
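The architectural details live in the paper itself; below is a minimal, illustrative PyTorch sketch of the core idea as stated above: encode each retrieved passage separately, keep only the first k encoder output vectors per passage, and hand the concatenation to the decoder. The encoder here is a toy placeholder, and k and all shapes are assumptions rather than the paper's settings.

```python
import torch

def fid_light_encode(encoder, passages, k):
    """Encode each retrieved passage separately, then keep only the first k
    encoder output vectors per passage: the bottleneck that shrinks the
    decoder's input (and thus its cost) in a FiD-Light-style model."""
    compressed = []
    for p in passages:
        states = encoder(p)            # (seq_len, hidden) per passage
        compressed.append(states[:k])  # first-k bottleneck
    return torch.cat(compressed, dim=0)  # decoder cross-attends over this

# Toy stand-ins: a "passage" is a tensor of token embeddings and the
# "encoder" is the identity; a real system would use a T5-style model.
toy_encoder = lambda p: p
passages = [torch.randn(128, 16) for _ in range(40)]  # 40 retrieved passages
decoder_input = fid_light_encode(toy_encoder, passages, k=8)
print(decoder_input.shape)  # 320 vectors (40 * 8) instead of 5120 (40 * 128)
```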
This paper studies multi-task training of retrieval-augmented generation models for knowledge-intensive tasks. We propose to clean the training set by utilizing a distinct property of knowledge-intensive generation: the connection of query-answer pairs to items in the knowledge base. We filter training examples via a confidence threshold on relevance labels, i.e., whether a pair is answerable by the knowledge base or not. We train a single Fusion-in-Decoder (FiD) generator on seven combined tasks of the KILT benchmark. The experimental results show that our simple yet effective approach substantially improves over the baselines on two strongly imbalanced tasks, and shows smaller improvements or no significant regressions on the remaining tasks. Furthermore, we demonstrate that our multi-task training with relevance label sampling scales well with increased model capacity, achieving state-of-the-art results on five out of seven KILT tasks.
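As an illustration of the filtering step described above, the sketch below drops training examples whose relevance confidence falls under a threshold; the field name `relevance_confidence` and the threshold value are assumptions for the example, not the paper's exact setup.

```python
def filter_training_set(examples, threshold=0.5):
    """Illustrative relevance-label filtering: keep a training example only
    if the confidence that the knowledge base actually answers the query
    clears a threshold, dropping unanswerable query-answer pairs."""
    return [ex for ex in examples if ex["relevance_confidence"] >= threshold]

train = [
    {"query": "who wrote Dune", "answer": "Frank Herbert", "relevance_confidence": 0.93},
    {"query": "best pizza?",    "answer": "n/a",           "relevance_confidence": 0.11},
]
print(filter_training_set(train))  # keeps only the answerable pair
```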
Recently, several dense retrieval (DR) models have demonstrated performance competitive with the term-based retrieval that is ubiquitous in search systems. In contrast to term-based matching, DR projects queries and documents into a dense vector space and retrieves results via (approximate) nearest-neighbor search. Deploying a new system, such as DR, inevitably involves tradeoffs in aspects of its performance. Typically, established retrieval systems are well understood in terms of effectiveness and costs, such as query latency, indexing throughput, or storage requirements. In this work, we propose a framework with a set of criteria that go beyond simple effectiveness measures to thoroughly compare two retrieval systems, with the explicit goal of assessing the readiness of one system to replace the other. This includes careful tradeoff considerations between effectiveness and various cost factors. Furthermore, we describe guardrail criteria, since even a system that is better on average may incur systematic failures on a minority of queries. The guardrails check for failures on certain query characteristics and for novel failure types that are only possible in dense retrieval systems. We demonstrate our decision framework on a Web ranking scenario. In that scenario, state-of-the-art DR models achieve surprisingly strong results, not only on average performance but also by passing an extensive set of guardrail tests, showing robustness across different query characteristics, lexical matching, generalization, and the number of regressions. It is impossible to predict whether DR will become ubiquitous in the future, but one way this could happen is through repeated applications of decision processes such as the one presented here.
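To make the guardrail idea concrete, here is a minimal sketch of one plausible check: instead of comparing averages, count per-query regressions of the candidate system against the incumbent. The margin and the metric are illustrative assumptions, not the framework's actual criteria.

```python
def guardrail_regressions(baseline_scores, candidate_scores, margin=0.1):
    """Simplified guardrail check: flag queries where the candidate system
    is worse than the incumbent by more than `margin`, rather than relying
    on average effectiveness alone.

    Inputs map query id -> effectiveness score (e.g., nDCG@10)."""
    return [q for q, b in baseline_scores.items()
            if candidate_scores.get(q, 0.0) < b - margin]

bm25  = {"q1": 0.70, "q2": 0.40, "q3": 0.55}
dense = {"q1": 0.75, "q2": 0.10, "q3": 0.60}
print(guardrail_regressions(bm25, dense))  # -> ['q2']: a systematic-failure candidate
```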
We present strong Transformer-based re-ranking and dense retrieval baselines for the recently released TripClick health ad-hoc retrieval collection. We improve the originally noisy training data with a simple negative-sampling policy. In TripClick's re-ranking task, we achieve large gains over BM25 that were not achieved with the original baselines. Furthermore, we study the impact of different domain-specific pre-trained models on TripClick. Finally, we show that dense retrieval outperforms BM25, even with a simple training procedure.
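The negative-sampling policy is only named above; a minimal sketch of the general pattern might look like the following, where negatives are drawn from a query's BM25 candidates minus its judged positives. The depths and counts are assumptions, not the paper's values.

```python
import random

def sample_negatives(bm25_candidates, positives, n=4, seed=42):
    """A simple negative-sampling policy of the kind described: draw training
    negatives from the top BM25 candidates for a query, excluding any passage
    judged (or clicked) as relevant."""
    pool = [pid for pid in bm25_candidates if pid not in positives]
    rng = random.Random(seed)
    return rng.sample(pool, min(n, len(pool)))

candidates = ["p1", "p2", "p3", "p4", "p5", "p6"]  # BM25 top-k for one query
clicked = {"p2"}                                   # relevance signal (e.g., clicks)
print(sample_negatives(candidates, clicked))
```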
Logic Mill is a scalable and openly accessible software system that identifies semantically similar documents within either one domain-specific corpus or multi-domain corpora. It uses advanced Natural Language Processing (NLP) techniques to generate numerical representations of documents. Currently it leverages a large pre-trained language model to generate these document representations. The system focuses on scientific publications and patent documents and contains more than 200 million documents. It is easily accessible via a simple Application Programming Interface (API) or via a web interface. Moreover, it is continuously being updated and can be extended to text corpora from other domains. We see this system as a general-purpose tool for future research applications in the social sciences and other domains.
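The system's internal pipeline is not detailed above; the following is a minimal sketch of the general pattern it describes (embed documents with a language model, then retrieve by vector similarity). The `embed` function is a hypothetical stand-in, not the Logic Mill API.

```python
import numpy as np

def embed(texts):
    """Hypothetical stand-in for a pre-trained language model encoder.
    In practice this would call a model (or a service such as Logic Mill)
    that maps each document to a fixed-size dense vector."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 768))  # placeholder vectors

def top_k_similar(query_vec, doc_matrix, k=5):
    """Cosine-similarity nearest-neighbor search over document vectors."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    scores = d @ q
    idx = np.argsort(-scores)[:k]
    return idx, scores[idx]

docs = ["A patent on lithium-ion battery cathodes.",
        "A paper about transformer language models.",
        "A study of battery electrode materials."]
vectors = embed(docs)
query = embed(["battery cathode chemistry"])[0]
print(top_k_similar(query, vectors, k=2))
```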
The analysis of network structure is essential to many scientific areas, ranging from biology to sociology. As the computational task of clustering these networks into partitions, i.e., solving the community detection problem, is generally NP-hard, heuristic solutions are indispensable. The exploration of expedient heuristics has led to the development of particularly promising approaches in the emerging technology of quantum computing. Motivated by the substantial hardware demands for all established quantum community detection approaches, we introduce a novel QUBO based approach that only needs number-of-nodes many qubits and is represented by a QUBO-matrix as sparse as the input graph's adjacency matrix. The substantial improvement on the sparsity of the QUBO-matrix, which is typically very dense in related work, is achieved through the novel concept of separation-nodes. Instead of assigning every node to a community directly, this approach relies on the identification of a separation-node set, which -- upon its removal from the graph -- yields a set of connected components, representing the core components of the communities. Employing a greedy heuristic to assign the nodes from the separation-node sets to the identified community cores, subsequent experimental results yield a proof of concept. This work hence displays a promising approach to NISQ ready quantum community detection, catalyzing the application of quantum computers for the network structure analysis of large scale, real world problem instances.
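The QUBO construction itself is beyond a short example, but the separation-node concept and the greedy assignment step can be illustrated as follows, assuming the separation-node set has already been found (in the paper, by solving the sparse QUBO).

```python
import networkx as nx

def communities_from_separation_nodes(G, separation_nodes):
    """Remove the separation-node set from the graph, take the resulting
    connected components as community cores, and greedily re-attach each
    separation node to the core holding most of its neighbors (ties are
    broken arbitrarily in this sketch)."""
    core_graph = G.subgraph(n for n in G if n not in separation_nodes)
    cores = [set(c) for c in nx.connected_components(core_graph)]
    for node in separation_nodes:
        neighbors = set(G.neighbors(node))
        best = max(cores, key=lambda c: len(c & neighbors), default=None)
        if best is not None:
            best.add(node)
        else:
            cores.append({node})
    return cores

G = nx.barbell_graph(5, 1)  # two 5-cliques joined through one path node
print(communities_from_separation_nodes(G, separation_nodes={5}))
```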
The following article presents a memetic algorithm with applying deep reinforcement learning (DRL) for solving practically oriented dual resource constrained flexible job shop scheduling problems (DRC-FJSSP). In recent years, there has been extensive research on DRL techniques, but without considering realistic, flexible and human-centered shopfloors. A research gap can be identified in the context of make-to-order oriented discontinuous manufacturing as it is often represented in medium-size companies with high service levels. From practical industry projects in this domain, we recognize requirements to depict flexible machines, human workers and capabilities, setup and processing operations, material arrival times, complex job paths with parallel tasks for bill of material (BOM) manufacturing, sequence-dependent setup times and (partially) automated tasks. On the other hand, intensive research has been done on metaheuristics in the context of DRC-FJSSP. However, there is a lack of suitable and generic scheduling methods that can be holistically applied in sociotechnical production and assembly processes. In this paper, we first formulate an extended DRC-FJSSP induced by the practical requirements mentioned. Then we present our proposed hybrid framework with parallel computing for multicriteria optimization. Through numerical experiments with real-world data, we confirm that the framework generates feasible schedules efficiently and reliably. Utilizing DRL instead of random operations leads to better results and outperforms traditional approaches.
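The extended formulation appears in the paper; as a rough sketch, the entities enumerated above might be captured in data structures along these lines (all names and fields are illustrative, not the authors' model).

```python
from dataclasses import dataclass, field

@dataclass
class Machine:
    id: int
    setup_times: dict        # (from_type, to_type) -> minutes, sequence-dependent
    automated: bool = False  # (partially) automated tasks need no worker

@dataclass
class Worker:
    id: int
    capabilities: set        # operation types this worker can perform

@dataclass
class Operation:
    id: int
    job_id: int
    eligible_machines: list  # flexible routing: alternative machines
    processing_time: dict    # machine id -> minutes
    predecessors: list = field(default_factory=list)  # BOM paths with parallel tasks

@dataclass
class Job:
    id: int
    operations: list
    material_arrival: float  # earliest start, driven by material arrival time
```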
The acquisition of high-quality human annotations through crowdsourcing platforms like Amazon Mechanical Turk (MTurk) is more challenging than expected. The annotation quality might be affected by various aspects such as annotation instructions, Human Intelligence Task (HIT) design, and the wages paid to annotators. To avoid potentially low-quality annotations which could mislead the evaluation of automatic summarization system outputs, we investigate the recruitment of high-quality MTurk workers via a three-step qualification pipeline. We show that we can successfully filter out bad workers before they carry out the evaluations and obtain high-quality annotations while optimizing the use of resources. This paper can serve as a basis for the recruitment of qualified annotators in other challenging annotation tasks.
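The paper's three-step pipeline is not reproduced here; as one illustrative step, a qualification filter could score workers against a small gold-labeled qualification HIT and keep those above an agreement threshold. All names and the threshold are assumptions, not the authors' procedure.

```python
def qualify_workers(worker_answers, gold, threshold=0.8):
    """Illustrative qualification step: keep workers whose answers on a
    small gold-labeled qualification HIT agree with the gold labels often
    enough.

    worker_answers: {worker_id: {item_id: answer}}
    gold:           {item_id: answer}
    """
    qualified = []
    for worker, answers in worker_answers.items():
        agreement = sum(answers.get(i) == g for i, g in gold.items()) / len(gold)
        if agreement >= threshold:
            qualified.append(worker)
    return qualified

gold = {"q1": "A", "q2": "B", "q3": "A"}
answers = {"w1": {"q1": "A", "q2": "B", "q3": "A"},
           "w2": {"q1": "B", "q2": "B", "q3": "B"}}
print(qualify_workers(answers, gold))  # -> ['w1']
```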
We present NusaCrowd, a collaborative initiative to collect and unite existing resources for Indonesian languages, including opening access to previously non-public resources. Through this initiative, we have brought together 137 datasets and 117 standardized data loaders. The quality of the datasets has been assessed manually and automatically, and their effectiveness has been demonstrated in multiple experiments. NusaCrowd's data collection enables the creation of the first zero-shot benchmarks for natural language understanding and generation in Indonesian and its local languages. Furthermore, NusaCrowd enables the creation of the first multilingual automatic speech recognition benchmark in Indonesian and its local languages. Our work is intended to help advance natural language processing research in under-represented languages.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.